Interpretable AI
Interpretable AI for Time-Series: Multi-Model Heatmap Fusion with Global Attention and NLP-Generated Explanations
Francis, Jiztom Kavalakkatt, Darr, Matthew J
In this paper, we present a novel framework for enhancing model interpretability by integrating heatmaps produced separately by ResNet and a restructured 2D Transformer with globally weighted input saliency. We address the critical problem of spatial-temporal misalignment in existing interpretability methods, where convolutional networks fail to capture global context and Transformers lack localized precision, a limitation that impedes actionable insights in safety-critical domains like healthcare and industrial monitoring. Our method merges gradient-weighted activation maps (ResNet) and Transformer attention rollout into a unified visualization, achieving full spatial-temporal alignment while preserving real-time performance. Empirical evaluations on clinical (ECG arrhythmia detection) and industrial (energy consumption prediction) datasets demonstrate significant improvements: the hybrid framework achieves 94.1% accuracy (F1 = 0.93) on the PhysioNet dataset and reduces regression error to RMSE = 0.28 kWh (R² = 0.95) on the UCI Energy Appliance dataset, outperforming standalone ResNet, Transformer, and InceptionTime baselines by 3.8-12.4%. An NLP module translates fused heatmaps into domain-specific narratives (e.g., "Elevated ST-segment between 2-4 seconds suggests myocardial ischemia"), validated via BLEU-4 (0.586) and ROUGE-L (0.650) scores. By formalizing interpretability as causal fidelity and spatial-temporal alignment, our approach bridges the gap between technical outputs and stakeholder understanding, offering a scalable solution for transparent, time-aware decision-making.
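The abstract leaves out the fusion details, but the core idea of merging a CNN saliency map with a Transformer attention map can be sketched as a normalized weighted blend. A minimal numpy illustration with toy 1-D saliency data and a hypothetical `alpha` blending weight (an assumption for illustration, not the authors' implementation):

```python
import numpy as np

def fuse_heatmaps(cam, attn, alpha=0.5, eps=1e-8):
    """Blend a CNN saliency map and a Transformer attention map
    of the same shape into a single fused heatmap in [0, 1]."""
    # Min-max normalise each map so their scales are comparable.
    cam_n = (cam - cam.min()) / (cam.max() - cam.min() + eps)
    attn_n = (attn - attn.min()) / (attn.max() - attn.min() + eps)
    # Convex combination: alpha weights the localized CNN evidence,
    # (1 - alpha) the global attention evidence.
    return alpha * cam_n + (1 - alpha) * attn_n

# Toy saliency over four time steps of a 1-D signal.
cam = np.array([0.1, 0.9, 0.2, 0.0])
attn = np.array([0.0, 0.5, 1.0, 0.5])
fused = fuse_heatmaps(cam, attn, alpha=0.6)
```

In practice the two maps would first have to be resampled to a common spatial-temporal grid; the blend itself is the simplest alignment-preserving choice.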
- North America > United States > Kentucky > Fayette County > Lexington (0.14)
- North America > United States > Iowa > Story County > Ames (0.04)
- North America > United States > Ohio > Franklin County > Columbus (0.04)
- Research Report > Experimental Study (0.68)
- Research Report > New Finding (0.46)
- Information Technology (1.00)
- Health & Medicine > Therapeutic Area > Cardiology/Vascular Diseases (1.00)
- Health & Medicine > Diagnostic Medicine (1.00)
- Energy (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Explanation & Argumentation (0.68)
Dataset | Mindset = Explainable AI | Interpretable AI
Wu, Caesar, Buyya, Rajkumar, Li, Yuan Fang, Bouvry, Pascal
We often use "explainable Artificial Intelligence" (XAI) and "interpretable AI" (IAI) interchangeably when we apply various XAI tools for a given dataset to explain the reasons that underpin machine learning (ML) outputs. However, these notions can sometimes be confusing because interpretation often has a subjective connotation, while explanations lean towards objective facts. We argue that XAI is a subset of IAI. The concept of IAI is beyond the sphere of a dataset; it includes the domain of a mindset. At the core of this ambiguity is the duality of reasons, in which we can reason either outwards or inwards. When directed outwards, we want the reasons to make sense through the laws of nature. When turned inwards, we want the reasons to be happy, guided by the laws of the heart. While XAI and IAI share reason as the common notion for the goal of transparency, clarity, fairness, reliability, and accountability in the context of ethical AI and trustworthy AI (TAI), their differences lie in that XAI emphasizes the post-hoc analysis of a dataset, while IAI requires an a priori mindset of abstraction. This hypothesis can be tested by empirical experiments based on an open dataset and harnessed by High-Performance Computing (HPC). The demarcation of XAI and IAI is indispensable because without it, it would be impossible to determine regulatory policies for many AI applications, especially in healthcare, human resources, banking, and finance. We aim to clarify these notions and lay the foundation of XAI, IAI, EAI, and TAI for many practitioners and policymakers in future AI applications and research.
Feature CAM: Interpretable AI in Image Classification
Clement, Frincy, Yang, Ji, Cheng, Irene
Deep Neural Networks have often been called black boxes because of their complex, deep architectures and the non-transparency of their inner layers. There is a lack of trust in using Artificial Intelligence in critical, high-precision fields such as security, finance, health, and manufacturing. Much focused work has been done to provide interpretable models, intending to deliver meaningful insights into the behavior of neural networks. In our research, we compare the state-of-the-art Activation-based methods (ABM) for interpreting the predictions of CNN models, specifically in the application of image classification. We then extend the comparison to eight CNN-based architectures to examine the differences in visualization and thus interpretability. We introduce a novel technique, Feature CAM, which falls in the perturbation-activation combination, to create fine-grained, class-discriminative visualizations. The resulting saliency maps from our experiments proved to be 3-4 times more human-interpretable than the state-of-the-art in ABM, while preserving machine interpretability, measured as the average confidence score in classification.
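The activation-based methods surveyed here descend from Class Activation Mapping, where the class-specific heatmap is a weighted sum of the last convolutional layer's channels, rectified and rescaled. A minimal numpy sketch of basic CAM with toy shapes and random data (an illustration of the ABM family, not the Feature CAM technique itself):

```python
import numpy as np

def class_activation_map(activations, class_weights, eps=1e-8):
    """Basic CAM: weight each activation channel by the class's
    final-layer weight, sum over channels, ReLU, then normalise.

    activations   : (K, H, W) feature maps from the last conv layer
    class_weights : (K,) weights connecting each channel to the class logit
    """
    cam = np.tensordot(class_weights, activations, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)                              # keep positive evidence
    return cam / (cam.max() + eps)                          # scale to [0, 1]

rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7))   # toy: 8 channels of 7x7 feature maps
w = rng.standard_normal(8)     # toy class weights
heatmap = class_activation_map(acts, w)
```

Perturbation-based variants additionally occlude input regions and measure the change in the class score; combining the two is what the abstract calls the perturbation-activation combination.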
LymphoML: An interpretable artificial intelligence-based method identifies morphologic features that correlate with lymphoma subtype
Shankar, Vivek, Yang, Xiaoli, Krishna, Vrishab, Tan, Brent, Silva, Oscar, Rojansky, Rebecca, Ng, Andrew, Valvert, Fabiola, Briercheck, Edward, Weinstock, David, Natkunam, Yasodha, Fernandez-Pol, Sebastian, Rajpurkar, Pranav
The accurate classification of lymphoma subtypes using hematoxylin and eosin (H&E)-stained tissue is complicated by the wide range of morphological features these cancers can exhibit. We present LymphoML, an interpretable machine learning method that identifies morphologic features that correlate with lymphoma subtypes. Our method applies steps to process H&E-stained tissue microarray cores, segment nuclei and cells, compute features encompassing morphology, texture, and architecture, and train gradient-boosted models to make diagnostic predictions. LymphoML's interpretable models, developed on a limited volume of H&E-stained tissue, achieve non-inferior diagnostic accuracy to pathologists using whole-slide images and outperform black-box deep-learning models on a dataset of 670 cases from Guatemala spanning 8 lymphoma subtypes. Using SHapley Additive exPlanations (SHAP) analysis, we assess the impact of each feature on model predictions and find that nuclear shape features are most discriminative for DLBCL (F1-score: 78.7%) and classical Hodgkin lymphoma (F1-score: 74.5%). Finally, we provide the first demonstration that a model combining features from H&E-stained tissue with features from a standardized panel of 6 immunostains achieves a diagnostic accuracy (85.3%) similar to that of a 46-stain panel (86.1%).
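As a toy illustration of the SHAP analysis the authors rely on: for a purely additive model, the exact SHAP value of a feature reduces to its contribution minus that contribution's average over the data, and the attributions plus the base value recover each prediction (SHAP's additivity property). A numpy sketch under that simplifying assumption (made-up feature functions, not LymphoML's gradient-boosted pipeline):

```python
import numpy as np

# Toy additive model f(x) = f1(x1) + f2(x2), standing in for the
# per-feature structure SHAP recovers from gradient-boosted trees.
f1 = lambda x: 2.0 * x   # e.g. a nuclear-shape feature (hypothetical)
f2 = lambda x: x ** 2    # e.g. a texture feature (hypothetical)

X = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [3.0, 2.0]])

# For an additive model, the exact SHAP value of feature i on a sample
# is its contribution minus the average contribution over the data.
phi1 = f1(X[:, 0]) - f1(X[:, 0]).mean()
phi2 = f2(X[:, 1]) - f2(X[:, 1]).mean()

base = f1(X[:, 0]).mean() + f2(X[:, 1]).mean()  # expected model output
preds = f1(X[:, 0]) + f2(X[:, 1])

# Additivity: base value + per-feature attributions = each prediction.
assert np.allclose(base + phi1 + phi2, preds)
```

For non-additive models SHAP must average over feature coalitions, but this degenerate case shows what the per-feature attributions mean.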
- North America > Guatemala (0.24)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Europe > Germany (0.04)
- Research Report > New Finding (0.67)
- Research Report > Experimental Study > Negative Result (0.46)
ODTlearn: A Package for Learning Optimal Decision Trees for Prediction and Prescription
Vossler, Patrick, Aghaei, Sina, Justin, Nathan, Jo, Nathanael, Gómez, Andrés, Vayanos, Phebe
ODTlearn is an open-source Python package that provides methods for learning optimal decision trees for high-stakes predictive and prescriptive tasks based on the mixed-integer optimization (MIO) framework proposed in (Aghaei et al., 2019) and several of its extensions. The current version of the package provides implementations for learning optimal classification trees, optimal fair classification trees, optimal classification trees robust to distribution shifts, and optimal prescriptive trees from observational data. We have designed the package to be easy to maintain and extend as new optimal decision tree problem classes, reformulation strategies, and solution algorithms are introduced. To this end, the package follows object-oriented design principles and supports both commercial (Gurobi) and open-source (COIN-OR branch-and-cut) solvers.
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > Texas > Travis County > Austin (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Diagnosis (0.90)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Object-Oriented Architecture (0.69)
The white-box model approach aims for interpretable AI
When building machine learning models or algorithms, developers should adhere to the principle of interpretability so that they and their intended users know exactly how the inputs and the model's inner workings produce outputs. Interpretable AI is a book by Ajay Thampi, a machine learning engineer at Meta; its second chapter explains the white-box model approach to machine learning and gives examples of white-box models. These models are interpretable because they use easy-to-understand algorithms that show how data inputs produce outputs, or target variables. Thampi walks readers through three types of white-box models in this chapter and how they are applied: linear regression, generalized additive models (GAMs) and decision trees. Given that the term regression in machine learning refers to models and algorithms that learn relationships within data to make predictions, the premise of a linear regression model is that the target prediction variable can be expressed as a linear combination of the input variables.
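That premise, y = w1·x1 + … + wp·xp + b, can be demonstrated with ordinary least squares; a minimal numpy sketch with made-up coefficients (a generic illustration, not code from the book):

```python
import numpy as np

# Toy data: the target is an exact linear combination of two inputs,
# y = 3*x1 - 2*x2 + 5, so a linear model can recover it perfectly.
rng = np.random.default_rng(42)
X = rng.random((20, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 5.0

# Append a column of ones so the intercept is learned as one more weight.
A = np.column_stack([X, np.ones(len(X))])
weights, *_ = np.linalg.lstsq(A, y, rcond=None)

w1, w2, b = weights  # interpretable: each weight is a per-feature effect
```

The interpretability of the fitted model is exactly this: each learned weight states how much the prediction moves per unit change in its input variable, holding the others fixed.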
How artificial intelligence can improve patient care
AI represents a set of technologies consisting of automated systems able to perform tasks such as visual perception, augmented diagnostics and prediction, and seamless processing of large quantities of data. These capabilities have promising applications in value-based care, including strengthening patient care and enhancing health outcomes. Data overload is an escalating problem afflicting health care systems across the continuum. Interpretable AI processes can simplify colossal amounts of complex data and synthesize its key facets for analysis by the proper specialist, along with recommendations and insights. This ability to digest and streamline data maximizes the valuable time a doctor can spend with patients.
Cybertrust: From Explainable to Actionable and Interpretable AI (AI2)
Galaitsi, Stephanie, Trump, Benjamin D., Keisler, Jeffrey M., Linkov, Igor, Kott, Alexander
To benefit from AI advances, users and operators of AI systems must have reason to trust them. Trust arises from multiple interactions in which predictable and desirable behavior is reinforced over time. Providing the system's users with some understanding of AI operations can support predictability, but forcing AI to explain itself risks constraining AI capabilities to only those reconcilable with human cognition. We argue that AI systems should be designed with features that build trust by bringing decision-analytic perspectives and formal tools into AI. Instead of trying to achieve explainable AI, we should develop interpretable and actionable AI. Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations. In doing so, it will allow AI system predictions to be examined and tested, establishing a basis for trust in the systems' decision-making and ensuring broad benefits from deploying and advancing their computational capabilities.
- North America > United States > Massachusetts > Suffolk County > Boston (0.14)
- North America > United States > Maryland > Prince George's County > Adelphi (0.05)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Massachusetts > Middlesex County > Concord (0.04)
- Government > Military (1.00)
- Information Technology > Security & Privacy (0.69)
Can AI put humans back in the loop? (ZDNet)
Is it possible to make artificial intelligence more trustworthy by inserting a human being into the decision process of machine learning? It may be, but you don't get something for nothing. That human being better be an individual who knows a lot about what the neural network is trying to figure out. And that presents a conundrum, given that one of the main promises of AI is precisely to find out things humans don't know. It's a conundrum that is sidestepped in a new bit of AI work by scientists at the Technische Universität Darmstadt in Germany.
What is explainable AI?
Artificial intelligence doesn't need any extra fuel for the myths and misconceptions that surround it. Consider the phrase "black box" – its connotations are equal parts mysterious and ominous, the stuff of "The X Files" more than the day-to-day business of IT. Yet it's true that AI systems, such as machine learning or deep learning, take inputs and then produce outputs (or make decisions) with no decipherable explanation or context. The system makes a decision or takes some action, and we don't necessarily know why or how it arrived at that outcome. The system just does it.